Differential evolution
In evolutionary computation, differential evolution (DE) is a method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Such methods are commonly known as metaheuristics as they make few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics such as DE do not guarantee an optimal solution is ever found.
DE is used for multidimensional real-valued functions but does not use the gradient of the problem being optimized, which means DE does not require the optimization problem to be differentiable, as is required by classic optimization methods such as gradient descent and quasi-Newton methods. DE can therefore also be used on optimization problems that are not even continuous, are noisy, change over time, etc.
DE optimizes a problem by maintaining a population of candidate solutions and creating new candidate solutions by combining existing ones according to its simple formulae, and then keeping whichever candidate solution has the best score or fitness on the optimization problem at hand. In this way the optimization problem is treated as a black box that merely provides a measure of quality given a candidate solution and the gradient is therefore not needed.
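As an illustration of this black-box setting (an assumed example, not taken from the article), the Python sketch below defines a hypothetical objective that is non-smooth and noisy; DE only ever queries the returned quality value and never needs a gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "black box" objective: non-smooth (absolute value, rounding)
# and noisy, so gradient-based methods do not apply, yet DE can still be
# used because it only looks at the returned quality value.
def black_box(x):
    x = np.asarray(x, dtype=float)
    deterministic = np.sum(np.abs(x)) + np.sum(np.round(x) ** 2)
    noise = rng.normal(scale=0.01)  # simulated measurement noise
    return deterministic + noise
```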
DE is originally due to Storn and Price. Books have been published on theoretical and practical aspects of using DE in parallel computing, multiobjective optimization, and constrained optimization; they also contain surveys of application areas.
== Algorithm ==

A basic variant of the DE algorithm works by having a population of candidate solutions (called agents). These agents are moved around in the search-space by using simple mathematical formulae to combine the positions of existing agents from the population. If the new position of an agent is an improvement it is accepted and forms part of the population, otherwise the new position is simply discarded. The process is repeated and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered.
Formally, let f: \mathbb{R}^n \to \mathbb{R} be the cost function which must be minimized or the fitness function which must be maximized. The function takes a candidate solution as argument in the form of a vector of real numbers and produces a real number as output which indicates the fitness of the given candidate solution. The gradient of f is not known. The goal is to find a solution \mathbf{m} for which f(\mathbf{m}) \leq f(\mathbf{p}) for all \mathbf{p} in the search-space, which would mean \mathbf{m} is the global minimum. Maximization can be performed by considering the function h := -f instead.
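For concreteness, here is a minimal Python sketch of such a cost function together with the negation trick for maximization; the sphere function and the as_minimization helper are illustrative assumptions, not part of the source text.

```python
import numpy as np

# Assumed example cost function f: R^n -> R (the sphere function);
# its global minimum is f(0) = 0, but DE is given only its values.
def f(x):
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

# Maximizing a fitness function g is handled by minimizing h := -g.
def as_minimization(g):
    return lambda x: -g(x)
```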
Let \mathbf{x} \in \mathbb{R}^n designate a candidate solution (agent) in the population. The basic DE algorithm can then be described as follows:
* Initialize all agents \mathbf{x} with random positions in the search-space.
* Until a termination criterion is met (e.g. number of iterations performed, or adequate fitness reached), repeat the following:
** For each agent \mathbf{x} in the population do:
*** Pick three agents \mathbf{a}, \mathbf{b}, and \mathbf{c} from the population at random; they must be distinct from each other as well as from agent \mathbf{x}.
*** Pick a random index R \in \{1, \ldots, n\} (n being the dimensionality of the problem to be optimized).
*** Compute the agent's potentially new position \mathbf{y} = (y_1, \ldots, y_n) as follows:
**** For each i, pick a uniformly distributed number r_i \sim U(0,1).
**** If r_i < \text{CR} or i = R then set y_i = a_i + F \times (b_i - c_i), otherwise set y_i = x_i.
**** (In essence, the new position is the outcome of binary crossover of agent \mathbf{x} with the intermediate agent \mathbf{z} = \mathbf{a} + F \times (\mathbf{b} - \mathbf{c}).)
*** If f(\mathbf{y}) < f(\mathbf{x}), replace the agent in the population with the improved candidate solution, that is, replace \mathbf{x} with \mathbf{y} in the population.
* Pick the agent from the population that has the highest fitness or lowest cost and return it as the best found candidate solution.
Note that F \in [0,2] is called the ''differential weight'' and \text{CR} \in [0,1] is called the ''crossover probability''; both of these parameters are selectable by the practitioner along with the population size \text{NP} \geq 4 (see below).
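A minimal Python sketch of this basic loop (often labelled DE/rand/1/bin) is shown below. It assumes box constraints for the random initialization and a fixed iteration budget as the termination criterion; the function name, the default parameter values, and the bounds handling are illustrative choices, not prescribed by the text.

```python
import numpy as np

def differential_evolution(f, bounds, NP=20, F=0.8, CR=0.9, iterations=1000, seed=0):
    """Sketch of the basic DE loop described above (minimization).

    f      -- black-box cost function to minimize (no gradient needed)
    bounds -- sequence of (lower, upper) limits, one pair per dimension
    NP, F, CR, iterations and seed are illustrative defaults; the text only
    requires NP >= 4, F in [0, 2] and CR in [0, 1].
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    n = len(bounds)

    # Initialize all agents with random positions in the search-space.
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(NP, n))
    cost = np.array([f(x) for x in pop])

    for _ in range(iterations):  # termination: fixed iteration budget
        for i in range(NP):
            # Pick three agents a, b, c distinct from each other and from agent i.
            others = [j for j in range(NP) if j != i]
            a, b, c = pop[rng.choice(others, size=3, replace=False)]

            # Binary crossover of agent i with the intermediate agent a + F*(b - c).
            R = rng.integers(n)   # this index is always taken from the mutant
            r = rng.random(n)     # r_i ~ U(0, 1) for each dimension
            cross = r < CR
            cross[R] = True
            y = np.where(cross, a + F * (b - c), pop[i])

            # Greedy selection: keep y only if it improves the cost.
            fy = f(y)
            if fy < cost[i]:
                pop[i], cost[i] = y, fy

    best = int(np.argmin(cost))
    return pop[best], cost[best]

# Example usage with an assumed sphere objective.
if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    x_best, f_best = differential_evolution(sphere, bounds=[(-5, 5)] * 3)
    print(x_best, f_best)
```

The greedy replacement step mirrors the pseudocode above: a trial vector replaces its parent only when it improves the cost, so the best cost in the population never gets worse.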
